Important Update: Please note that the results of this test were actually inconclusive. I didn’t realize this when I posted it and I didn’t figure it out until it was pointed out in the comments. Please disregard the results of this test and see my new post explaining what went wrong and what I am doing about it. I am leaving this post up as it is my personal policy to not delete data.
Update: Based on some excellent feedback in the comments (Seriously, thank you everyone!) I have updated the post with some clarifications and additional data. Specifically, I added a diagram of the page setup and removed a confusing comment I made about Javascript links.
As SEOmoz has matured as a company, our SEO team has shifted away from treating SEO purely as an art and more toward treating it as a science. There is certainly a need for both perspectives, but I believe we are now much better centered between the two.
As a result of this shift, we have been running more tests and analyzing more data. Before I get into the topic of our latest test results, let me provide some important points to establish context.
- There is overwhelming evidence that, from an “ROI on time spent working” perspective, there is much more value in link building and creating link-worthy content than in obsessing over search engine algorithm details like PageRank sculpting. Link building is human oriented and thus more in line with the long-term goals of the search engines. Links also have the added bonus of being easy to measure and thus easier to prioritize.
- We can’t directly measure how PageRank flows, so we can only infer results. This needs to be acknowledged when interpreting test results. That said, we also can’t directly measure objects outside our solar system, and this kind of inference has become the basis of modern astronomy. (If it is good enough for NASA, it is good enough for SEOmoz ;-p)
The Experiment
We chose the following five PageRank sculpting methods to test:
- Rel=“nofollow” – The standard mechanism for nofollowing a link.
- Link Consolidation – Consolidating low-priority pages. You can read more about link consolidation here.
- Iframe – Include a standard link in an iframe that is blocked via robots.txt or meta robots so engines can’t follow it.
- Javascript – An external Javascript file (blocked from robots) that inserts links into divs when the page renders.
- Control Case – A null test with standard links.
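For readers who want to see how these treatments differ in markup, here is a rough sketch of what each type of test link might have looked like. I am not publishing the real test pages (see the note under the page setup below), so every URL, filename, and anchor phrase here is made up, and the exact markup used on the test sites may have differed. Link consolidation is an information-architecture change rather than a markup change, so it is not shown.

```html
<!-- Control case: a normal link that passes PageRank as usual -->
<a href="/keyword-test-page.html">unique keyword phrase</a>

<!-- Rel="nofollow": the standard attribute-based method -->
<a href="/sculpted-page.html" rel="nofollow">optimized anchor phrase</a>

<!-- Iframe method: the link lives on a separate page that is pulled in
     through an iframe; that page is blocked from crawling, e.g. with
     "Disallow: /blocked/" in robots.txt or a meta robots noindex tag -->
<iframe src="/blocked/iframe-links.html" width="400" height="40"></iframe>

<!-- Javascript method: an empty div is filled in at render time by an
     external script that is itself blocked from robots -->
<div id="js-links"></div>
<script type="text/javascript" src="/blocked/insert-links.js"></script>
<!-- insert-links.js might contain something like:
     document.getElementById('js-links').innerHTML =
       '<a href="/sculpted-page.html">optimized anchor phrase</a>';
-->
```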
Page Setup
We then built five standardized websites, each of which used one of these methods for its test links (one used iframes, another used Javascript, etc.) and included one normal link whose anchor text was a phrase that was completely unique on the Internet.
Each website in the experiment used the same template. Each keyword phrase was targeted in the same place on each page, and each page contained the same amount of text and the same number of images and links.
The standardized website layout contained:
- Three pages per domain (the homepage and the keyword-specific content pages)
- One internal inlink per page (links in content)
- One inlink to homepage from third party site
- Six total outbound links:
  - Two “junk” links to popular website articles to mimic a natural linking profile (old Digg articles)
  - One normal link to the keyword test page
  - Three modified links (according to the given test) to three separate pages optimized for the given keyword
- Links to internal pages came only from other pages within the same site
- The internal links used the anchor text (random English phrase) that was optimized for the given internal page
- Outbound links (aka “junk” links) used anchor text that was the same as the title tag of the external page being linked to (Old Digg articles)
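To make the layout above concrete, here is a hypothetical sketch of the six outbound links as they might have appeared on one of the test pages (the nofollow variant, in this case). Again, the URLs, anchor phrases, and article titles below are invented for illustration and are not the ones used in the actual test.

```html
<!-- Two "junk" links to old Digg articles; anchor text mirrors each
     article's title tag to mimic a natural linking profile -->
<a href="http://digg.com/example-old-article-1">Example Old Digg Article Title One</a>
<a href="http://digg.com/example-old-article-2">Example Old Digg Article Title Two</a>

<!-- One normal, followed link to the keyword test page, using the
     unique random English phrase as anchor text -->
<a href="/keyword-test-page.html">unique random english phrase</a>

<!-- Three modified links (nofollowed in this variant) to three separate
     internal pages, each using the phrase that page is optimized for -->
<a href="/internal-page-1.html" rel="nofollow">random english phrase one</a>
<a href="/internal-page-2.html" rel="nofollow">random english phrase two</a>
<a href="/internal-page-3.html" rel="nofollow">random english phrase three</a>
```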
Example Test Website
Please note that the above example was NOT actually used. I provided a fake example to maintain the integrity of the testing platform for future tests.
The experiment variables were:
- links (based on experiment type)
- colors
- photos (although alt text was standardized)
- text (randomized text based on proper English grammar using a standardized word-set)
We then did everything we could to make sure that all of these pages received the same amount of link juice from external sources.
The null result would be a random assortment of experiment types ranking in the SERPs.
The alternative result would be one experiment type consistently outranking the others.
Redundancy
We then duplicated this experiment eight times in parallel. This meant 40 different domains, 40 different IP addresses, 8 different WHOIS records, 8 different hosting providers and 8 different payment methods. (We then went outside and drank)
We ran this test for 2 months.
The Results
PageRank Sculpting Method | Average Rank in Google |
---|---|
Nofollow | 2.4 |
Link Consolidation | 3.0 |
Iframe | 3.1 |
Javascript | 3.2 |
Control Case | 3.2 |
Rank | Test 1 | Test 2 | Test 3 | Test 4 | Test 5 | Test 6 | Test 7 | Test 8 |
---|---|---|---|---|---|---|---|---|
1. | nofollow | nofollow | control | nofollow | consolidation | iframe | nofollow | control |
2. | javascript | iframe | javascript | consolidation | iframe | consolidation | consolidation | iframe |
3. | consolidation | javascript | nofollow | iframe | nofollow | control | control | javascript |
4. | control | control | consolidation | javascript | javascript | javascript | javascript | nofollow |
5. | iframe | consolidation | iframe | control | control | nofollow | iframe | consolidation |
Please see my update at the top of this post. These test results are actually inconclusive.
As you can see, the nofollow method ranked an average of 0.8 positions higher in the SERPs than the control case (2.4 vs. 3.2), and it took the top spot in four of the eight tests. This is a significant margin when you realize the rankings only run from 1 to 5.
It appears that the iframe method and link consolidation were slightly effective, but the margin was so small that it could be attributed to error.
The Javascript method did not appear to work at all; on average it ranked the same as the control case.
The Bottom Line
Despite what the search engine representatives say, nofollow is still an effective way to sculpt PageRank. If you already have nofollow sculpting in place, don’t remove it. If you don’t, implementing it probably won’t make a drastic change, but we encourage you to test it when it is responsible to do so.
I invite you to share your interpretation of these results in the comments below. As with any experiment, these results are not valid unless they can be reproduced and stand up to the critique of others. What should we do differently in future experiments?